Folk psychology


Does ChatGPT Have a Mind?

Goldstein, Simon; Levinstein, Benjamin A.

arXiv.org Artificial Intelligence

This paper examines whether Large Language Models (LLMs) like ChatGPT possess minds, focusing specifically on whether they have a genuine folk psychology encompassing beliefs, desires, and intentions. We approach this question by investigating two key aspects: internal representations and dispositions to act. First, we survey various philosophical theories of representation, including informational, causal, structural, and teleosemantic accounts, arguing that LLMs satisfy key conditions proposed by each. We draw on recent interpretability research in machine learning to support these claims. Second, we explore whether LLMs exhibit robust dispositions to perform actions, a necessary component of folk psychology. We consider two prominent philosophical traditions, interpretationism and representationalism, to assess LLM action dispositions. While we find evidence suggesting LLMs may satisfy some criteria for having a mind, particularly in game-theoretic environments, we conclude that the data remain inconclusive. Additionally, we reply to several skeptical challenges to LLM folk psychology, including issues of sensory grounding, the "stochastic parrots" argument, and concerns about memorization. Our paper has three main upshots. First, LLMs do have robust internal representations. Second, whether LLMs have robust action dispositions remains an open question. Third, existing skeptical challenges to LLM representation do not survive philosophical scrutiny.


Taking the Intentional Stance Seriously, or "Intending" to Improve Cognitive Systems

Bridewell, Will

arXiv.org Artificial Intelligence

It is easy to find claims that researchers have made considerable progress in artificial intelligence over the last several decades. However, our everyday interactions with cognitive systems (e.g., Siri, Alexa, DALL-E) quickly move from intriguing to frustrating. One cause of those frustrations is a mismatch between the expectations created by our inherent folk-psychological theories and the real limitations we experience with existing computer programs. The software does not understand that people have goals, beliefs about how to achieve those goals, and intentions to act accordingly. One way to align cognitive systems with our expectations is to imbue them with mental states that mirror those we use to predict and explain human behavior. This paper discusses these concerns and illustrates the challenge of following this route by analyzing the mental state 'intention.' That analysis is joined with high-level methodological suggestions that support progress in this endeavor.